
    Putative regulatory role of GlyS antisense RNA in an obligate insect symbiont Buchnera aphidicola

    My research seeks to answer the question of how small RNAs regulate gene expression in the uncultivable obligate insect symbiont Buchnera aphidicola, which is important for a deeper understanding of how gene regulation shapes host-symbiont interaction and co-evolution. This presentation will discuss how I apply a novel dual-plasmid vector system to investigate gene regulation of an uncultivable symbiont in vivo. Thus far, a plasmid encoding the antisense RNA (asRNA) of a candidate gene (glyS) has been constructed and transformed into E. coli cells. Next, the DNA coding sequence (CDS) of glyS will be amplified and restriction-digested, enabling the second plasmid carrying the CDS to be constructed and transformed into E. coli cells. Activation or inhibition of gene expression by the asRNA will be measured using a green fluorescent protein (GFP) reporter fused to the CDS. This research should provide more insight into how small RNAs regulate gene expression in bacteria with reduced genomes in the absence of transcription factors and operons. These insights would help us understand the mechanisms of gene regulation in bacteria, which would in turn help decipher the genome co-evolution of hosts and symbionts.

    Typical Internal Defects of Gas-Insulated Switchgear and Partial Discharge Characteristics

    Gas-insulated switchgear (GIS) is a widely used type of electrical equipment that uses sulfur hexafluoride (SF6) as its insulating medium instead of traditional air, offering good reliability and flexibility. However, GIS may contain internal defects, which can induce partial discharge (PD). PD can cause great harm to the GIS and to the power system, so it is important to study the intrinsic characteristics and detection of PD for online monitoring. In this chapter, typical internal defects of GIS and their PD characteristics are discussed. Several detection methods are also presented, including electromagnetic, chemical, and optical methods.

    Conditional GANs with Auxiliary Discriminative Classifier

    Conditional generative models aim to learn the underlying joint distribution of data and labels to achieve conditional data generation. Among them, the auxiliary classifier generative adversarial network (AC-GAN) has been widely used, but suffers from the problem of low intra-class diversity of the generated samples. The fundamental reason pointed out in this paper is that the classifier of AC-GAN is generator-agnostic and therefore cannot provide informative guidance for the generator to approach the joint distribution, resulting in a minimization of the conditional entropy that decreases the intra-class diversity. Motivated by this understanding, we propose a novel conditional GAN with an auxiliary discriminative classifier (ADC-GAN) to resolve the above problem. Specifically, the proposed auxiliary discriminative classifier becomes generator-aware by recognizing the class-labels of the real data and the generated data discriminatively. Our theoretical analysis reveals that the generator can faithfully learn the joint distribution even without the original discriminator, making the proposed ADC-GAN robust to the value of the coefficient hyperparameter and the selection of the GAN loss, and stable during training. Extensive experimental results on synthetic and real-world datasets demonstrate the superiority of ADC-GAN in conditional generative modeling compared to state-of-the-art classifier-based and projection-based conditional GANs.
    Comment: ICML 202
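    The abstract does not spell out the training objectives, so the following is only a rough, hypothetical sketch of how a generator-aware discriminative classifier of this kind could be implemented: a single 2K-way head labels real samples as (class y, real) and generated samples as (class y, fake), and the generator is pushed toward the real-class labels. The function names and the exact form of the generator term are assumptions, not the paper's formulation.

```python
# Rough sketch (not the paper's exact objective) of a discriminative classifier
# with 2*K outputs: indices 0..K-1 mean "class y of real data",
# indices K..2K-1 mean "class y of generated data".
import torch
import torch.nn.functional as F

def classifier_loss(logits_real, logits_fake, y_real, y_fake, num_classes):
    # Classifier learns to recognize (y, real) and (y, fake) jointly.
    loss_real = F.cross_entropy(logits_real, y_real)
    loss_fake = F.cross_entropy(logits_fake, y_fake + num_classes)
    return loss_real + loss_fake

def generator_loss(logits_fake, y_fake, num_classes):
    # Minimizing this pushes generated samples toward (y, real) and away from (y, fake).
    log_p = F.log_softmax(logits_fake, dim=1)
    idx = torch.arange(y_fake.size(0))
    return (log_p[idx, y_fake + num_classes] - log_p[idx, y_fake]).mean()
```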

    Decoupled Mixup for Data-efficient Learning

    Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data. Recently, dynamic mixup methods have effectively improved on earlier static policies (e.g., linear interpolation) by maximizing salient regions or maintaining the target in mixed samples. The key difference is that mixed samples generated by dynamic policies are more instance-discriminative than those from static ones, e.g., the foreground objects are decoupled from the background. However, optimizing mixup policies in input space with dynamic methods is computationally expensive compared to static ones. Hence, we transfer the decoupling mechanism of dynamic methods from the data level to the objective-function level and propose the general decoupled mixup (DM) loss. The primary effect is that DM can adaptively focus on discriminative features without losing the original smoothness of mixup, while avoiding heavy computational overhead. As a result, DM enables static mixup methods to match or even exceed the performance of dynamic methods. This also raises an interesting objective-design question for mixup training: we need to focus on both smoothing the decision boundaries and identifying discriminative features. Extensive experiments on supervised and semi-supervised learning benchmarks across seven classification datasets validate the effectiveness of DM when it is combined with various mixup methods.
    Comment: The preprint revision, 15 pages, 6 figures. The source code is available at https://github.com/Westlake-AI/openmixu
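    For context, here is a minimal sketch of the static, linear-interpolation mixup policy that the abstract refers to; the decoupled mixup (DM) loss itself is not reproduced here, since the abstract does not give its exact form. Names such as mixup_batch and soft_cross_entropy are illustrative.

```python
# Minimal static mixup (linear interpolation) sketch; the DM loss from the paper
# is applied at the objective level on top of this kind of mixing.
import torch
import torch.nn.functional as F

def mixup_batch(x, y, num_classes, alpha=1.0):
    # Mix a batch with a random permutation of itself; return mixed inputs and soft labels.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mixed, y_mixed

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against the mixed (soft) labels.
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```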

    Leveraging Graph-based Cross-modal Information Fusion for Neural Sign Language Translation

    Sign Language (SL), as the mother tongue of the deaf community, is a special visual language that most hearing people cannot understand. In recent years, neural Sign Language Translation (SLT), as a possible way to bridge the communication gap between deaf and hearing people, has attracted widespread academic attention. We find that the current mainstream end-to-end neural SLT models, which try to learn language knowledge in a weakly supervised manner, cannot mine enough semantic information under low-resource data conditions. Therefore, we propose to introduce additional word-level semantic knowledge from sign language linguistics to help improve current end-to-end neural SLT models. Concretely, we propose a novel neural SLT model with multi-modal feature fusion based on a dynamic graph, in which the cross-modal information, i.e., text and video, is first assembled into a dynamic graph according to its correlation, and the graph is then processed by a multi-modal graph encoder to generate multi-modal embeddings for use in the subsequent neural translation models. To the best of our knowledge, we are the first to introduce graph neural networks, for fusing multi-modal information, into neural sign language translation models. Moreover, we conducted experiments on the publicly available popular SLT dataset RWTH-PHOENIX-Weather-2014T, and the quantitative results show that our method can improve the performance of the model.
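    As a rough illustration of the general idea only (not the paper's architecture), the sketch below assembles text and video features into one graph whose edge weights come from pairwise cosine similarity and applies a single message-passing step; the module name CrossModalGraphFusion and all shapes are hypothetical.

```python
# Illustrative cross-modal graph fusion: correlation-based dynamic adjacency
# over concatenated text/video nodes, followed by one aggregation step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalGraphFusion(nn.Module):  # hypothetical module for illustration
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, text_feats, video_feats):
        nodes = torch.cat([text_feats, video_feats], dim=0)  # (T + V, dim)
        normed = F.normalize(nodes, dim=-1)
        sim = normed @ normed.t()                 # pairwise cosine similarity
        adj = F.softmax(sim, dim=-1)              # dynamic, correlation-based adjacency
        fused = adj @ self.proj(nodes)            # one message-passing / aggregation step
        return F.relu(fused)                      # multi-modal embeddings for the translator
```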

    Editing Language Model-based Knowledge Graph Embeddings

    Recent decades have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via language models. However, language model-based KG embeddings are usually deployed as static artifacts, which are challenging to modify after deployment without re-training. To address this issue, in this paper we propose a new task of editing language model-based KG embeddings. The proposed task aims to enable data-efficient and fast updates to KG embeddings without damaging the performance of the rest. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge-editing baselines, demonstrating the limited ability of previous models to handle this challenging task. We further propose a simple yet strong baseline dubbed KGEditor, which utilizes additional parametric layers of a hyper-network to edit/add facts. Comprehensive experimental results demonstrate that KGEditor performs better when updating specific facts while not affecting the rest, and requires low training resources. Code and datasets will be available at https://github.com/zjunlp/PromptKG/tree/main/deltaKG.
    Comment: Work in progress; the project website is https://zjunlp.github.io/project/KGE_Editing
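    The abstract only states that KGEditor edits facts through additional parametric layers of a hyper-network; the sketch below illustrates the simpler underlying idea of keeping the pretrained embeddings frozen and training only a small extra layer on the edited facts. The class EditableKGE, the TransE-style scoring, and the additive correction are assumptions for illustration, not KGEditor's actual design.

```python
# Illustrative sketch (not KGEditor's architecture): freeze pretrained entity and
# relation embeddings and train only a small additional layer on edited facts,
# so an update touches few parameters and leaves the rest of the model intact.
import torch
import torch.nn as nn

class EditableKGE(nn.Module):  # hypothetical wrapper for illustration
    def __init__(self, entity_emb, relation_emb):
        super().__init__()
        self.entity_emb = nn.Embedding.from_pretrained(entity_emb, freeze=True)
        self.relation_emb = nn.Embedding.from_pretrained(relation_emb, freeze=True)
        dim = entity_emb.size(1)
        self.edit_layer = nn.Linear(dim, dim)     # the only trainable parameters
        nn.init.zeros_(self.edit_layer.weight)    # correction starts as a no-op
        nn.init.zeros_(self.edit_layer.bias)

    def score(self, h, r, t):
        # TransE-style score with an additive, trainable correction on the head entity.
        h_e = self.entity_emb(h)
        h_e = h_e + self.edit_layer(h_e)          # learned only from the edited facts
        return -(h_e + self.relation_emb(r) - self.entity_emb(t)).norm(p=2, dim=-1)
```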